virtual reality
Dutch air force reads pilots' brainwaves to make training harder
Fighter pilots in training are having their brainwaves read by AI as they fly in virtual reality, to measure how difficult they find tasks and ramp up the complexity if needed. Experiments show that trainee fighter pilots prefer this adaptive system to a rigid, pre-programmed alternative, but that it doesn't necessarily improve their skills. Training pilots in simulators and virtual reality is cheaper and safer than real flights, but these teaching scenarios need to be adjusted in real time so tasks sit in the sweet spot between comfort and overload. Evy van Weelden at the Royal Netherlands Aerospace Centre, Amsterdam, and her colleagues used a brain-computer interface to read student pilots' brainwaves via electrodes attached to the scalp. An AI model analysed that data to determine how difficult the pilots were finding the task.
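The closed loop described above — a workload estimate drives task difficulty toward a "sweet spot" band — can be sketched as follows. This is a minimal illustration, not the air force's system: the workload score, thresholds, and step size are all invented names and values.

```python
# Hypothetical sketch of adaptive difficulty: a mental-workload estimate in
# [0, 1] (here a stand-in value) nudges scenario difficulty up when the
# trainee is under-loaded and down when they are overloaded.

def adapt_difficulty(difficulty, workload, low=0.4, high=0.7, step=0.1):
    """Keep the task in the band between comfort and overload."""
    if workload < low:          # task too easy -> increase complexity
        return min(1.0, difficulty + step)
    if workload > high:         # task overwhelming -> back off
        return max(0.0, difficulty - step)
    return difficulty           # inside the sweet spot: hold steady

# Example: an under-loaded trainee gets a harder scenario on the next update.
print(adapt_difficulty(0.5, workload=0.2))  # -> 0.6
```

In the study, the workload estimate would come from the AI model reading the EEG signal; here it is simply passed in as a number.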
Meta shifts some metaverse investments to AI smart glasses
Meta is shifting some of its investments in the metaverse to AI glasses and wearables, hoping to capitalise on the momentum in that segment, a company spokesperson has said. Over the last decade, Meta has poured billions of dollars into building the metaverse, which lets people interact in a virtual reality. However, the tech giant has struggled to convince investors of the viability of the nascent technology. Bloomberg first reported on Thursday that Meta would cut its metaverse investment by as much as 30%. Its shares climbed more than 3.4% following the news.
- North America > United States (0.17)
- North America > Central America (0.16)
- Oceania > Australia (0.08)
- (17 more...)
- Leisure & Entertainment (0.75)
- Information Technology (0.52)
- Media > Film (0.30)
How LLMs are Shaping the Future of Virtual Reality
Özkaya, Süeda, Berrezueta-Guzman, Santiago, Wagner, Stefan
The integration of Large Language Models (LLMs) into Virtual Reality (VR) games marks a paradigm shift in the design of immersive, adaptive, and intelligent digital experiences. This paper presents a comprehensive review of recent research at the intersection of LLMs and VR, examining how these models are transforming narrative generation, non-player character (NPC) interactions, accessibility, personalization, and game mastering. Drawing from an analysis of 62 peer-reviewed studies published between 2018 and 2025, we identify key application domains ranging from emotionally intelligent NPCs and procedurally generated storytelling to AI-driven adaptive systems and inclusive gameplay interfaces. We also address the major challenges facing this convergence, including real-time performance constraints, memory limitations, ethical risks, and scalability barriers. Our findings highlight that while LLMs significantly enhance realism, creativity, and user engagement in VR environments, their effective deployment requires robust design strategies that integrate multimodal interaction, hybrid AI architectures, and ethical safeguards. The paper concludes by outlining future research directions in multimodal AI, affective computing, reinforcement learning, and open-source development, aiming to guide the responsible advancement of intelligent and inclusive VR systems.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Hardware (1.00)
- (2 more...)
New wearable device lets you touch fabric online, read braille, and more
VoxeLite can help you literally feel websites. VoxeLite adds physical sensations of touch and feel to digital experiences like scrolling a smartphone. A time traveler visiting from an earlier era might reasonably conclude that humanity has entered the age of cyborgs and cybernetics. Pedestrians regularly walk down city streets with tiny computers in their hands and even smaller digital devices shoved in their ear canals.
Behavioral Biometrics for Automatic Detection of User Familiarity in VR
Zafar, Numan, Prosun, Priyo Ranjan Kundu, Chaudhry, Shafique Ahmad
As virtual reality (VR) devices become increasingly integrated into everyday settings, a growing number of users without prior experience will engage with VR systems. Automatically detecting a user's familiarity with VR as an interaction medium enables real-time, adaptive training and interface adjustments, minimizing user frustration and improving task performance. In this study, we explore the automatic detection of VR familiarity by analyzing hand movement patterns during a passcode-based door-opening task, which is a well-known interaction in collaborative virtual environments such as meeting rooms, offices, and healthcare spaces. While novice users may lack prior VR experience, they are likely to be familiar with analogous real-world tasks involving keypad entry. We conducted a pilot study with 26 participants, evenly split between experienced and inexperienced VR users, who performed tasks using both controller-based and hand-tracking interactions. Our approach uses state-of-the-art deep classifiers for automatic VR familiarity detection, achieving the highest accuracies of 92.05% and 83.42% for hand-tracking and controller-based interactions, respectively. In the cross-device evaluation, where classifiers trained on controller data were tested using hand-tracking data, the model achieved an accuracy of 78.89%. The integration of both modalities in the mixed-device evaluation obtained an accuracy of 94.19%. Our results underline the promise of using hand movement biometrics for the real-time detection of user familiarity in critical VR applications, paving the way for personalized and adaptive VR experiences.
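The pipeline above — hand-trajectory in, novice/expert label out — can be illustrated with a toy stand-in. The paper uses deep sequence classifiers; the feature extractor, nearest-centroid rule, and all numbers below are invented for illustration only.

```python
# Toy sketch of familiarity detection from hand movement: summarize a
# trajectory with simple motion features, then assign it to the nearest
# class centroid. A real system (as in the paper) would use deep classifiers
# on the full movement sequence.
import math

def path_features(traj):
    """Summarize a hand trajectory [(x, y, z), ...] as (path length, mean step)."""
    steps = [math.dist(a, b) for a, b in zip(traj, traj[1:])]
    total = sum(steps)
    return total, total / len(steps)

def classify(traj, centroids):
    """Assign the trajectory to the nearest class centroid in feature space."""
    f = path_features(traj)
    return min(centroids, key=lambda label: math.dist(f, centroids[label]))

# Invented centroids: novices tend to produce longer, more hesitant paths.
centroids = {"experienced": (0.8, 0.02), "novice": (2.5, 0.06)}
smooth_reach = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.6, 0.2, 0.0), (0.9, 0.3, 0.0)]
print(classify(smooth_reach, centroids))  # -> experienced
```

The same interface would apply unchanged whether the trajectory comes from controller tracking or hand tracking, which is what makes the paper's cross-device evaluation possible.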
- North America > United States (0.04)
- Europe > Italy (0.04)
- Europe > Germany > Brandenburg > Potsdam (0.04)
- Asia (0.04)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Leisure & Entertainment > Games > Computer Games (0.34)
Social-Physical Interactions with Virtual Characters: Evaluating the Impact of Physicality through Encountered-Type Haptics
Godden, Eric, Groenewegen, Jacquie, Wheeler, Michael, Pan, Matthew K. X. J.
This work investigates how robot-mediated physicality influences the perception of social-physical interactions with virtual characters. ETHOS (Encountered-Type Haptics for On-demand Social interaction) is an encountered-type haptic display that integrates a torque-controlled manipulator and interchangeable props with a VR headset to enable three gestures: object handovers, fist bumps, and high fives. We conducted a user study to examine how ETHOS adds physicality to virtual character interactions and how this affects presence, realism, enjoyment, and connection metrics. Each participant experienced one interaction under three conditions: no physicality (NP), static physicality (SP), and dynamic physicality (DP). SP extended the purely virtual baseline (NP) by introducing tangible props for direct contact, while DP further incorporated motion and impact forces to emulate natural touch. Results show presence increased stepwise from NP to SP to DP. Realism, enjoyment, and connection also improved with added physicality, though differences between SP and DP were not significant. Comfort remained consistent across conditions, indicating no added psychological friction. These findings demonstrate the experiential value of ETHOS and motivate the integration of encountered-type haptics into socially meaningful VR experiences.
- North America > Canada > Ontario > Kingston (0.40)
- North America > United States > North Carolina (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Health & Medicine (0.68)
- Information Technology (0.46)
ETHOS: A Robotic Encountered-Type Haptic Display for Social Interaction in Virtual Reality
Godden, Eric, Groenewegen, Jacquie, Pan, Matthew K. X. J.
ETHOS (Encountered-Type Haptics for On-demand Social interaction) enables corresponding virtual and physical renderings of dynamic interpersonal interactions, demonstrated here with an object handover (left), fist bump (centre), and high five (right).
Abstract -- We present ETHOS (Encountered-Type Haptics for On-demand Social interaction), a dynamic encountered-type haptic display (ETHD) that enables natural physical contact in virtual reality (VR) during social interactions such as handovers, fist bumps, and high-fives. The system integrates a torque-controlled robotic manipulator with interchangeable passive props (silicone hand replicas and a baton), marker-based physical-virtual registration via a ChArUco board, and a safety monitor that gates motion based on the user's head and hand pose. We introduce two control strategies: (i) a static mode that presents a stationary prop aligned with its virtual counterpart, consistent with prior ETHD baselines, and (ii) a dynamic mode that continuously updates prop position by exponentially blending an initial mid-point trajectory with real-time hand tracking, generating a unique contact point for each interaction. Bench tests show static colocation accuracy of 5.09 ± 0.94 mm, while user interactions achieved temporal alignment with an average contact latency of 28.58 ± 31.21 ms. These results demonstrate the feasibility of recreating socially meaningful haptics in VR. By incorporating essential safety and control mechanisms, ETHOS establishes a practical foundation for high-fidelity, dynamic interpersonal interactions in virtual environments.
I. INTRODUCTION
Virtual reality (VR) enables embodied engagement with digital environments and creates immersive experiences that unlock novel affordances. Advances in hardware and content creation over the past decade have driven increasing interest in the field, supporting the adoption of VR across a broad range of domains.
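The dynamic mode's blending idea — start on a pre-planned mid-point trajectory, then exponentially shift weight toward the live tracked hand — can be sketched as below. The blend rate, function names, and coordinates are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of exponentially blending a planned prop target with a
# tracked hand position: the weight on the hand grows from 0 toward 1 as
# the interaction unfolds, so contact converges on the user's actual hand.
import math

def blend_target(planned, hand, t, rate=3.0):
    """Interpolate from the planned point toward the tracked hand over time t."""
    w = 1.0 - math.exp(-rate * t)   # w = 0 at t = 0, -> 1 as t grows
    return tuple(p + w * (h - p) for p, h in zip(planned, hand))

# At t = 0 the prop follows the planned mid-point trajectory exactly ...
print(blend_target((0.0, 0.5, 0.3), (0.2, 0.6, 0.4), t=0.0))  # -> (0.0, 0.5, 0.3)
# ... and for larger t the target converges on the tracked hand position.
```

An exponential weight gives a smooth, jerk-free handover of authority from the planner to the tracker, which matters when a torque-controlled manipulator is moving near a user.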
- North America > United States (0.05)
- North America > Canada > Ontario > Kingston (0.04)
- Europe > United Kingdom > Wales (0.04)
- (2 more...)
- North America > United States > Colorado > Adams County > Aurora (0.15)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > Canada (0.05)
- North America > United States > Iowa (0.05)
- Media (1.00)
- Leisure & Entertainment > Sports (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (3 more...)
Designing and Evaluating an AI-driven Immersive Multidisciplinary Simulation (AIMS) for Interprofessional Education
Wang, Ruijie, Lu, Jie, Pei, Bo, Jones, Evonne, Brinson, Jamey, Brown, Timothy
Interprofessional education has long relied on case studies and the use of standardized patients to support teamwork, communication, and related collaborative competencies among healthcare professionals. However, traditional approaches are often limited by cost, scalability, and inability to mimic the dynamic complexity of real-world clinical scenarios. To address these challenges, we designed and developed AIMS (AI-Enhanced Immersive Multidisciplinary Simulations), a virtual simulation that integrates a large language model (Gemini-2.5-Flash), a Unity-based virtual environment engine, and a character creation pipeline to support synchronized, multimodal interactions between the user and the virtual patient. AIMS was designed to enhance collaborative clinical reasoning and health promotion competencies among students from pharmacy, medicine, nursing, and social work. A formal usability testing session was conducted in which participants assumed professional roles on a healthcare team and engaged in a mix of scripted and unscripted conversations. Participants explored the patient's symptoms, social context, and care needs. Usability issues were identified (e.g., audio routing, response latency) and used to guide subsequent refinements. Findings generally suggest that AIMS supports realistic, profession-specific, and contextually appropriate conversations. We discussed both technical and pedagogical innovations of AIMS and concluded with future directions.
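A role-conditioned turn loop is one plausible shape for the profession-specific conversations described above. The sketch below is hypothetical: `llm` is a stand-in callable, not the actual Gemini-2.5-Flash integration used by AIMS, and all names are invented.

```python
# Hypothetical sketch: fold each learner's profession into the prompt so the
# virtual patient answers in context, and log the exchange for later review.
def patient_turn(llm, role, history, utterance):
    """Build a role-aware prompt, get the patient's reply, record the turn."""
    prompt = (f"You are a virtual patient in a team-training simulation. "
              f"A {role} student says: {utterance!r}. Reply in character.")
    reply = llm(prompt, history)
    history.append((role, utterance, reply))
    return reply

# Stub LLM for demonstration; a real deployment would call a model API and
# route the reply through the Unity character's speech pipeline.
echo_llm = lambda prompt, history: f"[patient reply #{len(history) + 1}]"
history = []
print(patient_turn(echo_llm, "nursing", history, "How are you sleeping?"))
# -> [patient reply #1]
```

Keeping the role and the running history in one place is what lets a single virtual patient answer a pharmacy student and a social-work student differently within the same session.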
- North America > United States > Georgia > Clarke County > Athens (0.14)
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- North America > Canada > Ontario > Toronto (0.05)
- (4 more...)
- Instructional Material (1.00)
- Research Report > Experimental Study (0.68)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Education > Educational Setting > Higher Education (0.47)
- Education > Educational Setting > Online (0.46)
Negative Shanshui: Real-time Interactive Ink Painting Synthesis
This paper presents Negative Shanshui, a real-time interactive AI synthesis approach that reinterprets classical Chinese landscape ink painting, i.e., shanshui, to engage with ecological crises in the Anthropocene. Negative Shanshui optimizes a fine-tuned Stable Diffusion model for real-time inference and integrates it with gaze-driven inpainting and frame interpolation, enabling dynamic morphing animations in response to the viewer's gaze, presented as an interactive virtual reality (VR) experience. The paper describes the complete technical pipeline, covering the system framework, optimization strategies, gaze-based interaction, and multimodal deployment in an art festival. Further analysis of audience feedback collected during its public exhibition highlights how participants variously engaged with the work through empathy, ambivalence, and critical reflection.
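The gaze-driven inpainting step — the viewer's gaze selects which region of the canvas the diffusion model re-synthesizes — can be sketched as a mask generator. The grid size, radius, and names below are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: build a binary inpainting mask around the current
# gaze point; only the flagged region would be handed to the diffusion
# model for re-synthesis each frame.
def gaze_mask(width, height, gaze, radius):
    """Return a binary mask marking pixels within `radius` of the gaze point."""
    gx, gy = gaze
    return [[1 if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2 else 0
             for x in range(width)] for y in range(height)]

mask = gaze_mask(8, 8, gaze=(4, 4), radius=2)
print(sum(map(sum, mask)))  # pixels flagged for re-synthesis -> 13
```

Restricting inpainting to a small gaze-centred region is also what makes real-time inference plausible: the model only ever regenerates a fraction of the canvas per frame.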
- Asia > China > Guangdong Province > Guangzhou (0.41)
- Asia > China > Hong Kong (0.40)
- Asia > South Korea (0.14)
- (7 more...)
- Law > Environmental Law (0.47)
- Media (0.46)